[Draft] Newton-Schulz via cuSOLVERMp#2706
vcherepanov-nv wants to merge 35 commits into NVIDIA:main
Conversation
Add a new distributed Newton-Schulz inverse square root API to Transformer Engine's common C library. This wraps the cusolverMpNewtonSchulz library function, following the same pattern as the existing cuBLASMp integration for comm_gemm.

New files:
- newton_schulz.h: public C API header with context management and computation functions
- newton_schulz/newton_schulz.cpp: implementation with RAII wrappers for cuSolverMp handles

Build integration:
- New NVTE_WITH_CUSOLVERMP CMake option and CUSOLVERMP_HOME env var
- NVTE_CHECK_CUSOLVERMP error-checking macro in logging.h
- Conditional compilation guarded by NVTE_WITH_CUSOLVERMP

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Add PyTorch-level bindings for the cuSolverMp Newton-Schulz inverse square root API introduced in the previous commit.

New files:
- pytorch/csrc/extensions/newton_schulz.cpp: C++ extension wrapping the C API with PyTorch tensor support
- pytorch/newton_schulz.py: Python wrapper that extracts the NCCL communicator from a torch.distributed ProcessGroup
- tests/pytorch/distributed/test_newton_schulz.py: pytest launcher
- tests/pytorch/distributed/run_newton_schulz.py: distributed test worker with a reference implementation for numerical validation

Modified files:
- pytorch/csrc/extensions.h: function declarations
- pytorch/csrc/extensions/pybind.cpp: pybind11 registrations
- pytorch/__init__.py: public API export

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Fix API mismatches discovered during compilation:
- cusolverMpCreate takes (handle*, deviceId, stream), not (handle*, stream)
- cusolverMpCreateDeviceGrid takes the handle as its first arg with a different parameter order
- Use cusolverMpGridMapping_t (not cusolverMpGridLayout_t) and CUSOLVERMP_GRID_MAPPING_COL_MAJOR
- cusolverMpCreateMatrixDesc has a different parameter order: (desc*, grid, dtype, M, N, MB, NB, RSRC, CSRC, LLD)
- cusolverMpNewtonSchulzDescriptorCreate takes only (nsDesc*) with no iteration/coefficient args
- No cusolverMpStreamSet exists; create the handle per-call with the user stream
- cusolverMpNewtonSchulz requires computeType and info parameters
- Switch from generic template RAII to explicit deleter structs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
…build

Add the NVTE_WITH_CUSOLVERMP compiler define and cusolverMp include/library paths to the PyTorch C++ extension build, following the same pattern as NVTE_UB_WITH_MPI and NVTE_ENABLE_NVSHMEM. Without this, the #ifdef NVTE_WITH_CUSOLVERMP guards in the PyTorch extension code would never be active, since the define was only set as PRIVATE in the CMake build for the common library.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Two fixes:
- Use ProcessGroupNCCL._comm_ptr() to extract the raw NCCL communicator pointer instead of the non-existent get_nccl_comm() method
- Pass global matrix dimensions (m, n) from Python to C++ instead of using local tensor dimensions, which would produce incorrect ScaLAPACK block sizes in the distributed computation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
cuSolverMp handle and grid creation are expensive operations. Move them from per-call creation in nvte_newton_schulz into the NVTECusolverMpCtx, which is their natural home — the context exists to encapsulate the grid. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
cuSolverMp cannot work with the default CUDA stream. Create a dedicated stream inside nvte_cusolvermp_ctx_create and remove the stream parameter from both C API functions since the context now owns its stream. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
The internal dedicated stream was reading the input tensor before the caller's stream had finished producing it, resulting in all-zero output. Add event-based synchronisation: the internal stream waits for the caller's input to be ready, and the caller's stream waits for the output to be written. Replaces the blocking cudaStreamSynchronize. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
cuSolverMp is asynchronous and uses the host workspace during multi-GPU execution. The event-based output sync did not block the host, so the local workspace_host vector was destroyed while the GPU was still reading from it. Restore cudaStreamSynchronize to ensure the host workspace remains valid for the full duration of the operation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Avoid creating and destroying a cudaEvent_t on every nvte_newton_schulz call by making it a persistent member of NVTECusolverMpCtx, matching the existing pattern for the stream. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Replace single event with in_ready and out_ready events. After the cuSolverMp call, record out_ready on the internal stream and make the caller's stream wait on it, ensuring the output tensor is ready before the caller uses it. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Replace reference-comparison test with a direct arithmetic check: if X is the inverse square root of A, then X @ A @ X must equal the identity matrix. This is more robust and removes the need for a separate reference implementation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
for more information, see https://pre-commit.ci
Greptile Summary

This PR adds a distributed Newton-Schulz matrix orthogonalization API to Transformer Engine by integrating cuSolverMp as a new optional dependency, with PyTorch Python bindings and distributed tests via torchrun. Key findings from this review:
Confidence Score: 2/5
Last reviewed commit: 8d81505
```python
# Check: if X = A^{-1/2}, then X @ A @ X should be the identity matrix
if rank == 0:
    XXT = X @ X.t()
    I = torch.eye(N, device=XXT.device, dtype=XXT.dtype)
    max_diff = (XXT - I).abs().max().item()
    print(f"Max |X @ X.t() - I|: {max_diff:.6e}", flush=True)
```
verification doesn't match the comment - if X = A^{-1/2}, the check should be X @ A @ X ≈ I, not X @ X.t() ≈ I. The current check verifies X is orthogonal, not that X is the inverse square root of A. Note that A_orig is created on line 76 but never used.
Suggested change:

```python
# Check: if X = A^{-1/2}, then X @ A @ X should be the identity matrix
XAX = X @ A_orig @ X
I = torch.eye(N, device=XAX.device, dtype=XAX.dtype)
max_diff = (XAX - I).abs().max().item()
print(f"Max |X @ A @ X - I|: {max_diff:.6e}", flush=True)
if torch.allclose(XAX, I, atol=args.atol, rtol=args.rtol):
```
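The distinction the reviewer draws can be checked numerically. A small NumPy sketch (illustrative only, not part of the PR) of why X @ A @ X ≈ I and X @ Xᵀ ≈ I are different checks:

```python
import numpy as np

# Deterministic SPD matrix A = Q diag(lam) Q^T with eigenvalues away from 1.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
lam = np.linspace(0.5, 2.0, 8)
A = Q @ np.diag(lam) @ Q.T

# Exact inverse square root via the eigendecomposition.
X = Q @ np.diag(lam ** -0.5) @ Q.T
I = np.eye(8)

# The check the comment describes: X @ A @ X ~ I holds for X = A^{-1/2}.
assert np.allclose(X @ A @ X, I, atol=1e-10)

# X @ X.T equals A^{-1} here, so it is NOT close to I unless A ~ I;
# the orthogonality check therefore tests a different property.
assert not np.allclose(X @ X.T, I, atol=1e-2)
```

So a test that passes the orthogonality check can still fail the inverse-square-root check whenever A is far from the identity.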
```python
nccl_backend = group._get_backend(torch.device("cuda"))
return nccl_backend._comm_ptr()
```
uses private PyTorch APIs (_get_backend, _comm_ptr) that may change in future versions
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
```python
quintic_coefficients = [
    4.0848, -6.8946, 2.9270,
    3.9505, -6.3029, 2.6377,
    3.7418, -5.5913, 2.3037,
    2.8769, -3.1427, 1.2046,
    2.8366, -3.0525, 1.2012,
]
coefficients = (
    quintic_coefficients if args.num_iterations == 5 else [1.5, -0.5, 0.0] * args.num_iterations
)
```
coefficients mismatch with API defaults - test uses 15 coefficients for 5 iterations, but newton_schulz.py defaults to 5 coefficients. This inconsistency means default API behavior isn't tested.
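For intuition, the coefficient triples map onto a polynomial iteration of the form X ← aX + bX(XᵀX) + cX(XᵀX)². The sketch below is a plain NumPy assumption about that scheme (not the library's actual implementation), showing that the default cubic triple (1.5, -0.5, 0.0) drives X toward an orthogonal factor:

```python
import numpy as np

def newton_schulz(X, coefficients):
    # One (a, b, c) triple per iteration: X <- a*X + b*(X @ S) + c*(X @ S @ S), S = X^T X.
    for a, b, c in zip(coefficients[0::3], coefficients[1::3], coefficients[2::3]):
        S = X.T @ X
        X = a * X + b * (X @ S) + c * (X @ S @ S)
    return X

# Matrix with known singular values strictly inside (0, 1), so the cubic
# iteration s <- 1.5*s - 0.5*s^3 converges to 1 for every singular value.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
s = np.linspace(0.1, 0.9, 6)
A = U @ np.diag(s) @ V.T

num_iterations = 15
X = newton_schulz(A, [1.5, -0.5, 0.0] * num_iterations)
assert np.allclose(X.T @ X, np.eye(6), atol=1e-6)  # X is (numerically) orthogonal
```

Under this reading, the 15-element quintic list above is five (a, b, c) triples, one per iteration.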
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
for more information, see https://pre-commit.ci
```c
 * \brief Functions for distributed Newton-Schulz inverse square root.
 *
 * This API is a TE-native binding to the cuSolverMp library.
 * It computes an iterative Newton-Schulz inverse square root
 * approximation on a distributed matrix.
```
Documentation claims this computes "inverse square root" but the test validates orthogonality (X @ X.t() ≈ I), and commit dd1dd0b states "it approximates orthogonal matrix, not inverse square root". If this computes the polar decomposition (orthogonal factor), the documentation should be updated to reflect that. Inverse square root would satisfy X @ A @ X ≈ I, which is different from orthogonality.
```python
m = x.size(0) * nranks  # rows are distributed across ranks
n = x.size(1)
```
Assumes rows are evenly distributed (`m = x.size(0) * nranks`) but doesn't validate this. If the matrix size isn't divisible by nranks, the computed global size m will be incorrect, leading to wrong results from cuSOLVERMp. Consider adding validation:

```python
# Global matrix dimensions
# Rows must be evenly distributed across ranks
local_rows = x.size(0)
m = local_rows * nranks
n = x.size(1)
```
Then add a validation check that all ranks have the same local_rows via dist.all_reduce.
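The cross-rank check the reviewer suggests can be sketched without any torch.distributed machinery: given the local row count gathered from each rank, verify they all agree before deriving the global size. The helper name is illustrative, not from the PR:

```python
def global_rows(local_rows_per_rank):
    """Derive the global row count, failing loudly on uneven distribution.

    local_rows_per_rank: x.size(0) as reported by each rank (e.g. via all_gather).
    """
    counts = set(local_rows_per_rank)
    if len(counts) != 1:
        raise ValueError(f"Uneven row distribution across ranks: {sorted(counts)}")
    return local_rows_per_rank[0] * len(local_rows_per_rank)

assert global_rows([64, 64, 64, 64]) == 256
try:
    global_rows([64, 64, 64, 63])  # 255 rows cannot be split as local_rows * nranks
    raised = False
except ValueError:
    raised = True
assert raised
```

In the real wrapper the per-rank counts would come from an all_gather (or an all_reduce min/max comparison) on the process group.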
```python
    num_iterations: int = 5,
    coefficients: Optional[List[float]] = None,
) -> None:
    """Compute Newton-Schulz inverse square root in-place on a distributed matrix.
```
Docstring says "inverse square root" but test checks orthogonality. Update to match actual behavior (see comment on header file).
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
```cpp
void nvte_newton_schulz(NVTECusolverMpCtx* ctx, int64_t m, int64_t n, NVTETensor x,
                        int64_t num_iterations, const float* coefficients, int64_t num_coefficients,
                        cudaStream_t caller_stream) {
```
num_coefficients parameter is unused
The num_coefficients parameter is accepted but never referenced in the function body. Neither cusolverMpNewtonSchulz_bufferSize (line 166) nor cusolverMpNewtonSchulz (line 183) receives this value. If cuSolverMp infers the count from num_iterations internally, then num_coefficients is dead code that should be removed from the API. If cuSolverMp actually needs it, then it should be passed to the cuSolverMp calls; otherwise the library may read out of bounds on the coefficients array.
```python
from transformer_engine.pytorch import optimizers
from transformer_engine.pytorch.export import onnx_export
from transformer_engine.pytorch.cross_entropy import parallel_cross_entropy
from transformer_engine.pytorch.newton_schulz import newton_schulz
```
Unconditional import of optional feature
newton_schulz is unconditionally imported and exported as part of the public API, even when TE is built without NVTE_WITH_CUSOLVERMP. While the function itself raises a runtime error when called, this exposes the symbol to all users and makes it appear as a supported feature in auto-complete and docs. Consider guarding this import behind a check (similar to how other optional features are handled), or at minimum adding a note in the docstring that the function requires NVTE_WITH_CUSOLVERMP=1 at build time.
```cpp
// Allocate/grow device workspace
if (ctx->workspace_size < wrksp_size_device) {
  if (ctx->workspace) {
    NVTE_CHECK_CUDA(cudaFree(ctx->workspace));
  }
  NVTE_CHECK_CUDA(cudaMalloc(&ctx->workspace, wrksp_size_device));
  ctx->workspace_size = wrksp_size_device;
}
```
Synchronous cudaFree/cudaMalloc on hot path
cudaFree followed by cudaMalloc inside nvte_newton_schulz will synchronize with the device each time the workspace needs to grow. Since the context is recreated on every call from newton_schulz.py (lines 82-86 create and destroy the ctx each invocation), the workspace will never be reused across calls, so the grow-only caching here is ineffective. Consider either:
- allowing callers to keep the context alive across calls, or
- using cudaMallocAsync/cudaFreeAsync on ctx->stream to avoid synchronous stalls.
```python
ctx_ptr = tex.cusolvermp_ctx_create(nccl_comm_ptr, nranks, rank)
try:
    tex.newton_schulz(ctx_ptr, m, n, x, num_iterations, coefficients)
finally:
    tex.cusolvermp_ctx_destroy(ctx_ptr)
```
Context created/destroyed per call wastes resources
A new NVTECusolverMpCtx is created and destroyed on every invocation of newton_schulz. Context creation involves cudaStreamCreate, two cudaEventCreate calls, cusolverMpCreate, and cusolverMpCreateDeviceGrid — all of which are heavyweight operations. And since the context is destroyed afterward, the grow-only workspace caching in the C++ layer (lines 170-177 of newton_schulz.cpp) is never actually reused.
Consider caching the context (e.g., in a module-level dict keyed by (nccl_comm_ptr, nranks, rank)) and reusing it across calls, or exposing the context lifecycle to callers so they can amortize the cost when calling newton_schulz repeatedly in a training loop.
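The module-level-dict idea can be sketched in plain Python. Names here are illustrative stand-ins, not the PR's API; the point is that the heavyweight create function runs once per distinct (comm, nranks, rank) key:

```python
# Hypothetical caching sketch: one context per communicator/topology key.
_ctx_cache = {}

def get_cached_ctx(create_fn, nccl_comm_ptr, nranks, rank):
    key = (nccl_comm_ptr, nranks, rank)
    if key not in _ctx_cache:
        _ctx_cache[key] = create_fn(nccl_comm_ptr, nranks, rank)
    return _ctx_cache[key]

# Demonstration with a stand-in for tex.cusolvermp_ctx_create:
created = []
def fake_create(comm, nranks, rank):
    created.append((comm, nranks, rank))
    return object()

c1 = get_cached_ctx(fake_create, 0xDEAD, 4, 0)
c2 = get_cached_ctx(fake_create, 0xDEAD, 4, 0)
assert c1 is c2 and len(created) == 1  # heavyweight creation ran only once
```

A real implementation would also need a teardown hook (e.g. on process-group destruction) so the cached contexts do not outlive their NCCL communicators.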
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
for more information, see https://pre-commit.ci
```python
assert (
    len(coefficients) == num_iterations * 3
), f"Unexpected number of coefficients: {len(coefficients)} for {num_iterations} iterations"
```
use ValueError instead of assert for validation - assert can be disabled with Python's -O flag
Suggested change:

```python
if len(coefficients) != num_iterations * 3:
    raise ValueError(
        f"Unexpected number of coefficients: {len(coefficients)} for {num_iterations} iterations"
    )
```
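A quick self-contained sketch of the validation shape the reviewer suggests (the function name is illustrative). The key difference from assert is that ValueError still fires under `python -O`:

```python
def check_coefficients(coefficients, num_iterations):
    # ValueError survives `python -O`; a bare assert would be stripped.
    if len(coefficients) != num_iterations * 3:
        raise ValueError(
            f"Unexpected number of coefficients: {len(coefficients)} "
            f"for {num_iterations} iterations"
        )

check_coefficients([1.5, -0.5, 0.0] * 5, 5)  # passes silently
try:
    check_coefficients([1.5, -0.5], 5)
    raised = False
except ValueError as e:
    raised = "Unexpected number of coefficients" in str(e)
assert raised
```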
```python
if x.dim() != 2:
    raise ValueError(f"Expected 2D tensor, got {x.dim()}D")
if not x.is_cuda:
    raise ValueError("Input tensor must be on CUDA device")
```
missing contiguity check - C++ code uses data_ptr() which requires contiguous memory. Non-contiguous tensors will cause incorrect results.
Suggested change:

```python
if x.dim() != 2:
    raise ValueError(f"Expected 2D tensor, got {x.dim()}D")
if not x.is_cuda:
    raise ValueError("Input tensor must be on CUDA device")
if not x.is_contiguous():
    raise ValueError("Input tensor must be contiguous")
```
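Why a missing contiguity check produces silently wrong data rather than an error can be seen with NumPy (the same strided-view mechanics apply to PyTorch tensors):

```python
import numpy as np

a = np.arange(12, dtype=np.float32).reshape(3, 4)
b = a.T                                  # a view: same buffer, different strides
assert a.flags["C_CONTIGUOUS"]
assert not b.flags["C_CONTIGUOUS"]

# A data_ptr()-style consumer walks the underlying buffer, which still holds
# a's row-major layout -- not the transposed values the caller intended.
raw = np.frombuffer(b.tobytes(order="A"), dtype=np.float32).reshape(4, 3)
assert not np.array_equal(raw, b)        # silently wrong values, no exception
```

Rejecting non-contiguous inputs up front (or calling `.contiguous()` on behalf of the user) turns this silent corruption into a clear error or a correct copy.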
Instead of requiring NVTE_WITH_CUSOLVERMP env var to be set for both the common library and PyTorch extension builds, inspect the already-built libtransformer_engine.so for exported symbols. This is more robust for incremental builds and CI environments where the env var may not be propagated to the extension build step. The PyTorch extension only calls nvte_* C API functions, so it does not need cusolverMp headers or libraries — only the compile definition. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
When NVTE_WITH_CUSOLVERMP is not defined, omit the Newton-Schulz functions entirely from the pybind module instead of registering stubs that throw runtime errors. The Python wrapper checks for the attribute at call time and raises a clear error message. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
for more information, see https://pre-commit.ci
```python
if x.dim() != 2:
    raise ValueError(f"Expected 2D tensor, got {x.dim()}D")
if not x.is_cuda:
    raise ValueError("Input tensor must be on CUDA device")
```
Missing dtype validation - docstring on line 36 states tensor must be float32 or bfloat16, but this isn't enforced. Passing unsupported dtypes leads to confusing errors from cuSolverMp.
Suggested change:

```python
if x.dim() != 2:
    raise ValueError(f"Expected 2D tensor, got {x.dim()}D")
if not x.is_cuda:
    raise ValueError("Input tensor must be on CUDA device")
if x.dtype not in (torch.float32, torch.bfloat16):
    raise ValueError(f"Input tensor must be float32 or bfloat16, got {x.dtype}")
```
Raise FileNotFoundError when no libtransformer_engine.so is found in any candidate location, and raise RuntimeError when nm is unavailable or exits non-zero, rather than silently returning False in both cases. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
```python
if dtype == "bfloat16":
    test_cmd += ["--atol=5e-2", "--rtol=5e-2"]

result = subprocess.run(test_cmd, env=os.environ, capture_output=True, check=False)
```
no timeout on subprocess - if the distributed test deadlocks or hangs (e.g., due to NCCL communication issues), this will block CI indefinitely. Add timeout=300 or similar.
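The failure mode and fix can be shown with a self-contained sketch: subprocess.run raises TimeoutExpired (after killing the child) instead of blocking forever on a hung worker. The 60-second sleep stands in for a deadlocked NCCL collective:

```python
import subprocess
import sys

try:
    subprocess.run(
        [sys.executable, "-c", "import time; time.sleep(60)"],  # stand-in for a hung worker
        timeout=1,       # CI-friendly bound; the PR's test would use something like 300
        check=False,
    )
    timed_out = False
except subprocess.TimeoutExpired:
    timed_out = True

assert timed_out  # the launcher regains control instead of hanging CI
```

On timeout the child process is killed before TimeoutExpired propagates, so the launcher can report a clean failure.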
```cmake
PATHS ${CUSOLVERMP_DIR}
PATH_SUFFIXES lib
REQUIRED)
target_link_libraries(transformer_engine PUBLIC ${CUSOLVERMP_LIB})
```
PUBLIC linkage exposes cuSOLVERMp to all downstream consumers of transformer_engine library. Since newton_schulz.h doesn't expose cuSOLVERMp types in the public API, PRIVATE linkage would provide better encapsulation (consumers don't need cuSOLVERMp at link time).
In common_lib_has_symbol, prepend a candidate derived by importing transformer_engine via importlib.util.find_spec and using the package directory as the root. This correctly resolves the SO path for source and PyPI installs (where it lives inside transformer_engine/), before falling back to the repo-root and CMake build dir candidates. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
```cpp
const int64_t mb = (m + ctx->nranks - 1) / ctx->nranks;
const int64_t nb = n;

// Compute local leading dimension
const int64_t local_rows = cusolverMpNUMROC(m, mb, ctx->rank, 0, ctx->nranks);
const int64_t lld = std::max(local_rows, static_cast<int64_t>(1));

const cudaDataType_t cuda_dtype = get_cuda_dtype(t->dtype());

// Create matrix descriptor
auto mat_desc = MakeCusolverMpMatrixDesc(ctx->grid.get(), cuda_dtype, m, n, mb, nb, 0, 0, lld);
```
Row-major vs. column-major layout mismatch
lld is set to local_rows, which is the column-major (Fortran/LAPACK) leading-dimension convention for a local_rows × n matrix. However, PyTorch tensors are row-major (C-contiguous) by default, where the correct leading dimension is n (the number of columns).

When cuSolverMp reads the data pointer assuming lld = local_rows (column-major) but the data is actually laid out row-major, it will silently misinterpret every element [i, j]:
- Expected offset (row-major): `i * n + j`
- What cuSolverMp sees (column-major, `lld = local_rows`): `j * local_rows + i`

The test matrix happens to be symmetric (A = Q Λ Qᵀ), so Aᵀ = A and the polar factor is also symmetric, which can mask this bug. For any non-symmetric rectangular matrix the result would be wrong.

If cuSolverMp requires column-major input, the caller should transpose the tensor before calling (or the API should accept a row-major flag). If cuSolverMp supports row-major, lld should be n:

```cpp
// For row-major PyTorch tensors (C-contiguous):
const int64_t lld = n;
```

Please verify the expected memory layout against the cuSolverMp documentation and update accordingly, and add a non-symmetric test case to catch this class of bug.
```cpp
NVTE_CHECK_CUSOLVERMP(cusolverMpNewtonSchulz(
    ctx->handle.get(), ns_desc.get(), m, n, t->data.dptr, 1, 1, mat_desc.get(), num_iterations,
    coefficients, CUDA_R_32F, ctx->workspace, ctx->workspace_size, workspace_host.data(),
    workspace_host.size(), nullptr));
```
nullptr devInfo suppresses convergence diagnostics
The last argument to cusolverMpNewtonSchulz is the device info array (devInfo). Passing nullptr means the library will not write convergence or per-iteration status back to the caller. If Newton-Schulz fails to converge or encounters a numerical issue, the NVTE_CHECK_CUSOLVERMP macro will only catch a non-CUSOLVER_STATUS_SUCCESS return code — convergence warnings or soft failures that still return SUCCESS will be silently swallowed.
Consider allocating a small device integer and checking it after the call:

```cpp
int* devInfo = nullptr;
NVTE_CHECK_CUDA(cudaMalloc(&devInfo, sizeof(int)));
NVTE_CHECK_CUSOLVERMP(cusolverMpNewtonSchulz(..., devInfo));
int h_info = 0;
NVTE_CHECK_CUDA(cudaMemcpy(&h_info, devInfo, sizeof(int), cudaMemcpyDeviceToHost));
NVTE_CHECK(h_info == 0, "cusolverMpNewtonSchulz devInfo = ", h_info);
cudaFree(devInfo);
```

This would make convergence failures clearly visible to the user.
```cpp
NVTECusolverMpCtx* nvte_cusolvermp_ctx_create(ncclComm_t comm, int nranks, int rank) {
  NVTE_API_CALL(nvte_cusolvermp_ctx_create);
  int device_id{};
  NVTE_CHECK_CUDA(cudaGetDevice(&device_id));

  cudaStream_t stream{};
  NVTE_CHECK_CUDA(cudaStreamCreate(&stream));

  cudaEvent_t in_ready{};
  NVTE_CHECK_CUDA(cudaEventCreate(&in_ready));
  cudaEvent_t out_ready{};
  NVTE_CHECK_CUDA(cudaEventCreate(&out_ready));

  auto handle = MakeCusolverMpHandle(device_id, stream);
  auto grid = MakeCusolverMpGrid(handle.get(), comm, nranks, 1, CUSOLVERMP_GRID_MAPPING_COL_MAJOR);

  return new NVTECusolverMpCtx{
      .nranks = nranks,
      .rank = rank,
      .stream = stream,
      .in_ready = in_ready,
      .out_ready = out_ready,
      .handle = std::move(handle),
      .grid = std::move(grid),
      .workspace = nullptr,
      .workspace_size = 0,
  };
```
Resource leak on exception in nvte_cusolvermp_ctx_create
The raw CUDA handles stream, in_ready, and out_ready are created with plain C API calls before being moved into the NVTECusolverMpCtx struct. If MakeCusolverMpHandle or MakeCusolverMpGrid throw (via NVTE_CHECK_CUSOLVERMP → NVTE_ERROR), the destructor for NVTECusolverMpCtx is never called and these three CUDA resources leak.
Since the exception unwinds the stack before reaching the return new NVTECusolverMpCtx{...} line, there is no way for the nvte_cusolvermp_ctx_destroy path to clean them up.
Wrapping each handle in its own RAII type (similar to the existing CusolverMpHandle) would ensure safe cleanup on any early-exit path:
```cpp
struct CudaStreamDeleter {
  void operator()(cudaStream_t s) const { cudaStreamDestroy(s); }
};
using CudaStream = std::unique_ptr<std::remove_pointer_t<cudaStream_t>, CudaStreamDeleter>;
// ... similar for cudaEvent_t
```
build_tools/utils.py (outdated)
```python
        f"'nm' failed on {lib_path} (exit code {e.returncode}):\n{e.stderr}"
    ) from e

return symbol in result.stdout
```
Substring match can produce false positives
symbol in result.stdout does a plain substring search over the entire nm output. If the library ever contains a symbol that has the target symbol as a prefix (e.g. nvte_cusolvermp_ctx_create_with_options), this check will incorrectly return True and enable cuSolverMp support in the PyTorch extension even though the real symbol is absent.
The nm -D output format is <address> <type> <name>\n per line. A safer check is to match against word boundaries:
```python
import re
return bool(re.search(r'\b' + re.escape(symbol) + r'\b', result.stdout))
```

or equivalently test for whitespace/line boundaries:

```python
return any(line.split()[-1] == symbol for line in result.stdout.splitlines() if line.strip())
```
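The false positive is easy to reproduce on synthetic nm output (the `_with_options` symbol below is invented purely to demonstrate the prefix collision):

```python
import re

# Fabricated nm -D style output: the longer symbol contains the target as a prefix.
nm_output = (
    "0000000000401000 T nvte_cusolvermp_ctx_create_with_options\n"
    "0000000000402000 T nvte_newton_schulz\n"
)
symbol = "nvte_cusolvermp_ctx_create"

assert symbol in nm_output                                           # substring: false positive
assert re.search(r"\b" + re.escape(symbol) + r"\b", nm_output) is None  # boundary match: correct
```

The `\b` boundary works here because `_` is a word character, so the trailing `_with_options` suffix prevents a match while a space or newline after the real symbol would allow one.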
Additional Comments (1)
The CMake convention is to declare an option() before its first use.
build_tools/utils.py (outdated)
```python
candidates = []
try:
    te_spec = importlib.util.find_spec("transformer_engine")
    print(f"TE_SPEC: {te_spec}")
```
Debug print statement left in production code.
print(f"TE_SPEC: {te_spec}") is a debug artifact that will pollute build output for every user who builds the PyTorch extension. It should be removed before merging.
Suggested change:

```python
if te_spec is not None and te_spec.origin is not None:
```
```cmake
if (NVTE_WITH_CUSOLVERMP)
  list(APPEND transformer_engine_SOURCES
       newton_schulz/newton_schulz.cpp)
endif()
```
option() declared after its first use.
Every other optional feature in this file (NVTE_UB_WITH_MPI, NVTE_ENABLE_NVSHMEM, NVTE_WITH_CUBLASMP) follows the pattern of declaring the option() immediately before the if() that uses it. Here, option(NVTE_WITH_CUSOLVERMP ...) is declared ~78 lines later at line 308, after the source-file list is already conditionally extended on line 230.
While this works when -DNVTE_WITH_CUSOLVERMP=ON is passed on the CMake command line (the cache variable is already set before this file processes), it is inconsistent with the established pattern and could surprise developers who add follow-on logic. Moving the option() declaration to just before line 230 would make the file consistent.
```c
#ifndef TRANSFORMER_ENGINE_COMMON_NEWTON_SCHULZ_H_
#define TRANSFORMER_ENGINE_COMMON_NEWTON_SCHULZ_H_

#include <nccl.h>
```
Unconditional #include <nccl.h> in a public header.
newton_schulz.h is installed as a public header (under include/transformer_engine/). The unconditional #include <nccl.h> means that any downstream project that includes this header — even one with no interest in Newton-Schulz — now requires NCCL in its include path.
ncclComm_t is only used in the function signatures of nvte_cusolvermp_ctx_create and nvte_newton_schulz, which are themselves only meaningful when NVTE_WITH_CUSOLVERMP is defined. Guarding the include and the declarations together would prevent the leakage:
Suggested change:

```c
#ifdef NVTE_WITH_CUSOLVERMP
#include <nccl.h>
// ... struct and function declarations ...
#endif  // NVTE_WITH_CUSOLVERMP
```
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Force-pushed f11c55f to 2cf55f6
for more information, see https://pre-commit.ci
Additional Comments (3)
When no libtransformer_engine.so is found, cuSOLVERMp auto-detection is silently disabled. This scenario is realistic: the common library might be built in a non-standard path, or a developer may be doing an incremental build of only the PyTorch extension. The function should at least warn before returning when lib_path is None:

```python
import warnings

warnings.warn(
    "Could not find libtransformer_engine.so; cuSOLVERMp auto-detection disabled.",
    stacklevel=2,
)
return False
```
The file correctly skips when fewer than 2 GPUs are present, but there is no skip when TE was built without cuSolverMp support. A simple guard at module level (or inside the test) would mirror the GPU-count check:

```python
import transformer_engine_torch as tex

if not hasattr(tex, "newton_schulz"):
    pytest.skip(
        "Newton-Schulz tests require TE built with NVTE_WITH_CUSOLVERMP=1.",
        allow_module_level=True,
    )
```
If 15-iteration runs use the test's custom coefficients (which are different from both defaults), the fallback path is also never exercised by the current tests with the default API, leaving the generic path untested in practice. Consider either:
This reverts commit 8f50bd5.
Additional Comments (12)
- The test is added to the QA script without any build-flag guard; add a conditional on NVTE_WITH_CUSOLVERMP.
- The test file skips when fewer than 2 GPUs are available, but not when Transformer Engine is built without cuSolverMp support; add a module-level skip check.
- The cuSolverMp block in the build silently falls back when its env var is unset; align with the NVSHMEM pattern.
- Lines 20-21 use private PyTorch APIs. Consider using public APIs or documenting this dependency clearly in comments, noting that this code may need updates with new PyTorch releases.
- The assertion on line 65 validates user input. Assertions can be disabled with Python's -O flag; raise ValueError instead.
- The C++ code calls data_ptr() on the tensor, which assumes contiguous memory. Also add dtype validation, since the docstring specifies float32 or bfloat16.
- The distributed test subprocess has no timeout. If it deadlocks (e.g., an NCCL communication issue), the test will block indefinitely, hanging the CI job. Add a timeout.
- cuSOLVERMp is linked with PUBLIC visibility; PRIVATE linkage would provide better encapsulation.
- Move the option(NVTE_WITH_CUSOLVERMP ...) declaration before its first use.
- Guard the newton_schulz import similarly to other optional features, or add a note in the docstring that NVTE_WITH_CUSOLVERMP=1 is required at build time.
- Raw CUDA handles (stream and events) leak if context creation throws. Wrap each handle in an RAII type (similar to the existing CusolverMpHandle).
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
Additional Comments (4)
When NVTE_WITH_CUSOLVERMP=1 is set, CUSOLVERMP_HOME should be validated up front. Compare to the existing pattern:

```python
if bool(int(os.getenv("NVTE_WITH_CUSOLVERMP", "0"))):
    assert (
        os.getenv("CUSOLVERMP_HOME") is not None
    ), "CUSOLVERMP_HOME must be set when compiling with NVTE_WITH_CUSOLVERMP=1"
    cusolvermp_home = Path(os.getenv("CUSOLVERMP_HOME"))
    ...
```
This test is added unconditionally to the QA script, so it will always execute even when TE is built without cuSolverMp support. Add a build-flag guard matching the build configuration:

```shell
if [ "${NVTE_WITH_CUSOLVERMP:-0}" = "1" ]; then
  python3 -m pytest -v -s --junitxml=$XML_LOG_DIR/pytest_test_newton_schulz.xml $TE_PATH/tests/pytorch/distributed/test_newton_schulz.py || test_fail "test_newton_schulz.py"
fi
```
The test only skips when fewer than 2 GPUs are available, but does not check whether TE was built with cuSolverMp support. Add an early skip guard:

```python
import transformer_engine_torch as tex

if not hasattr(tex, "newton_schulz"):
    pytest.skip("Newton-Schulz tests require TE built with NVTE_WITH_CUSOLVERMP=1.", allow_module_level=True)
```
Description
Adds an API to call the Newton-Schulz method on a distributed tensor.
Fixes # (issue)
Type of change
Changes
Please list the changes introduced in this PR:
Checklist: